    Detailed comparative analysis of PESQ and VISQOL behaviour in the context of playout delay adjustments introduced by VOIP jitter buffer algorithms

    The default best-effort Internet presents significant challenges for delay-sensitive applications such as VoIP. To cope with this non-determinism, VoIP applications employ receiver playout strategies that adapt to network conditions. Such strategies fall into two groups: per-talkspurt and per-packet. Per-talkspurt strategies exploit the silence periods within natural speech, stretching or shrinking those silences to track network conditions and thus preserving the integrity of active speech talkspurts; examples are described in [1, 2]. Per-packet strategies instead make adjustments both during silence periods and during talkspurts by time-scaling packets, a technique also known in the literature as time-warping. This copes better with short network delay changes, because a per-talkspurt strategy can only adapt during recognised silences even though many delay spikes are shorter than a talkspurt; however, it introduces potential degradation caused by the scaling of speech packets. Examples are described in [3, 4], and such techniques are frequently deployed in popular VoIP applications such as Google Talk and Skype. In this research, we focus on applications that deploy per-talkspurt strategies, which are commonly found in current telecommunication networks.
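    For concreteness, the sketch below shows a classic per-talkspurt playout delay estimator of the kind cited in [1]: delay statistics are updated on every packet, but the playout point only moves at talkspurt boundaries. The class name and constants are illustrative, not taken from the paper.

```python
class TalkspurtPlayout:
    """Per-talkspurt adaptive playout sketch (constants are illustrative)."""

    ALPHA = 0.998002  # EWMA weight commonly quoted in the literature
    BETA = 4.0        # safety margin, in units of delay variation

    def __init__(self) -> None:
        self.d_hat = None  # smoothed one-way delay estimate
        self.v_hat = 0.0   # smoothed delay-variation estimate

    def on_packet(self, delay: float) -> None:
        # Statistics are updated on every received packet.
        if self.d_hat is None:
            self.d_hat = delay
            return
        self.d_hat = self.ALPHA * self.d_hat + (1 - self.ALPHA) * delay
        self.v_hat = self.ALPHA * self.v_hat + (1 - self.ALPHA) * abs(delay - self.d_hat)

    def playout_delay(self) -> float:
        # Applied only at the start of a talkspurt, so active speech
        # is never time-scaled; only the silences are adjusted.
        return self.d_hat + self.BETA * self.v_hat
```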

    Impact of the duration of speech sequences on speech quality, Journal of Telecommunications and Information Technology, 2007, no. 4

    This paper describes simulations of speech-sequence transmission for intrusive measurement of voice transmission quality of service (VTQoS) in IP networks. The aim of the simulations was to investigate the impact of different speech-sequence durations on speech quality with respect to jitter and packet loss. The ITU-T G.729 and ITU-T G.723.1 encoding schemes were used, and speech quality was assessed with the perceptual evaluation of speech quality (PESQ) algorithm. The paper compares the impact of the different durations on speech quality and determines the optimal duration of a speech sequence for speech quality measurements in telecommunication networks.
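    As a minimal illustration of such an intrusive (full-reference) measurement, the snippet below scores a degraded recording against its clean reference with the open-source `pesq` package (an ITU-T P.862 implementation; `pip install pesq soundfile`). The file names are placeholders; G.729 and G.723.1 operate on 8 kHz speech, hence narrowband mode.

```python
import soundfile as sf   # reads WAV files into numpy arrays
from pesq import pesq    # ITU-T P.862 implementation

ref, fs = sf.read("reference_8s.wav")   # clean input sequence (8 kHz)
deg, _ = sf.read("degraded_8s.wav")     # after codec + jitter/packet loss

mos_lqo = pesq(fs, ref, deg, "nb")      # narrowband mode for 8 kHz codecs
print(f"PESQ MOS-LQO: {mos_lqo:.2f}")
```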

    How to Train No Reference Video Quality Measures for New Coding Standards using Existing Annotated Datasets?

    Subjective experiments are important for developing objective Video Quality Measures (VQMs). However, they are time-consuming and resource-demanding. In this context, being able to reuse existing subjective data on previous video coding standards to train models capable of predicting the perceptual quality of video content processed with newer codecs acquires significant importance. This paper investigates the possibility of generating an HEVC encoded Processed Video Sequence (PVS) in such a way that its perceptual quality is as similar as possible to that of an AVC encoded PVS whose quality has already been assessed by human subjects. In this way, the perceptual quality of the newly generated HEVC encoded PVS may be annotated approximately with the Mean Opinion Score (MOS) of the related AVC encoded PVS. To show the effectiveness of our approach, we compared the performance of a simple, low-complexity, yet effective no-reference hybrid model trained on the data generated with our approach against the same model trained on data collected in a dedicated subjective experiment. In addition, we merged seven subjective experiments so that they can be used as one aligned dataset containing either original HEVC bitstreams or the newly generated data produced by our proposed approach. The merging process accounts for differences in quality scale, chosen assessment method and context influence factors. This yields a large annotated dataset of HEVC sequences that is made publicly available for the design and training of no-reference hybrid VQMs for HEVC encoded content.
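    One way to read the core idea is as a quality-matched label transfer: search over HEVC encodes of the same source for the one whose full-reference quality is closest to that of the MOS-annotated AVC PVS, then reuse the AVC MOS. The sketch below is only our illustration of that idea; `encode_hevc` and `quality` are caller-supplied stand-ins (e.g. an x265 wrapper and VMAF), and the QP range is an assumption, not the paper's procedure.

```python
def transfer_mos(source, avc_pvs, avc_mos, encode_hevc, quality,
                 qp_candidates=range(22, 43, 2)):
    """Annotate an HEVC encode with the MOS of a quality-matched AVC PVS."""
    target = quality(source, avc_pvs)        # quality of the rated AVC PVS
    best_pvs, best_gap = None, float("inf")
    for qp in qp_candidates:
        hevc_pvs = encode_hevc(source, qp)   # candidate HEVC encode
        gap = abs(quality(source, hevc_pvs) - target)
        if gap < best_gap:
            best_pvs, best_gap = hevc_pvs, gap
    return best_pvs, avc_mos                 # HEVC PVS inherits the AVC MOS
```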

    MPEG DASH - some QoE-based insights into the tradeoff between audio and video for live music concert streaming under congested network conditions

    The rapid adoption of MPEG-DASH is testament to its core design principles, which let the client make informed decisions about media encoding representations based on network conditions, device type and preferences. Typically, the focus has mostly been on the different video quality representations rather than audio. However, for device types with small screens, the relative bandwidth budget allocated to the two streams may not differ that much, especially if high-quality audio is used; in this scenario, we argue that increased focus should be given to the bit rate representations for audio. Arising from this, we have designed and implemented a subjective experiment to evaluate and analyse the possible effect of using different audio quality levels. In particular, we investigate the possibility of providing reduced audio quality so as to free up bandwidth for video under certain conditions. The experiment was implemented for live music concert scenarios transmitted over mobile networks, and we suggest that the results will be of significant interest to DASH content creators when considering the bandwidth tradeoff between audio and video.
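    The toy representation picker below makes the tradeoff concrete: under a total bandwidth budget, choosing a lower audio representation can keep a higher video rung affordable. The bitrate ladders are illustrative values, not those used in the experiment.

```python
AUDIO_LADDER = [48, 96, 192]             # audio representations, kbit/s
VIDEO_LADDER = [300, 750, 1500, 3000]    # video representations, kbit/s

def pick(budget_kbps: int, prefer_video: bool = True):
    """Choose the (audio, video) pair that fits the budget, ranking by
    video bitrate first when video is prioritised."""
    best = None
    for a in AUDIO_LADDER:
        for v in VIDEO_LADDER:
            if a + v <= budget_kbps:
                key = (v, a) if prefer_video else (a, v)
                if best is None or key > best[0]:
                    best = (key, a, v)
    return best[1:] if best else (AUDIO_LADDER[0], VIDEO_LADDER[0])

# At 1600 kbit/s, audio drops from 192 to 96 kbit/s so that video
# can stay at 1500 kbit/s:
print(pick(1600))  # -> (96, 1500)
```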

    Modeling and estimating the subjects’ diversity of opinions in video quality assessment: a neural network based approach

    Subjective experiments are considered the most reliable way to assess perceived visual quality. However, observers’ opinions are characterized by large diversity: in fact, even the same observer is often unable to exactly repeat their first opinion when rating a given stimulus again. This makes the Mean Opinion Score (MOS) alone, in many cases, insufficient to get accurate information about the perceived visual quality. It is therefore important to have a measure characterizing to what extent the observed or predicted MOS value is reliable and stable. For instance, the Standard deviation of the Opinions of the Subjects (SOS) can be considered a measure of reliability when evaluating quality subjectively. However, we are not aware of models or algorithms that objectively predict how much diversity would be observed in subjects’ opinions in terms of SOS. In this work we observe, on the basis of a statistical analysis of several subjective experiments, that the disagreement between the quality as measured by different objective video quality metrics (VQMs) provides information on the diversity of the observers’ ratings on a given processed video sequence (PVS). In light of this observation we: i) propose and validate a model for the SOS observed in a subjective experiment; ii) design and train Neural Networks (NNs) that predict the average diversity that would be observed among the subjects’ ratings for a PVS, starting from a set of VQM values computed on that PVS; iii) give insights into how the same NN-based approach can be used to identify potential anomalies in the data collected in subjective experiments.
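    For context, a widely cited parametric relation between MOS and SOS on a five-point scale is the SOS hypothesis of Hoßfeld et al., which ties the two together through a single parameter; it is shown below as background, not as the model proposed in this paper.

```python
import math

def sos(mos: float, a: float) -> float:
    """SOS hypothesis: SOS^2 = a * (-MOS^2 + 6*MOS - 5) on a 1-5 scale.
    Diversity vanishes at the scale ends and peaks at mid-scale."""
    return math.sqrt(a * (-mos ** 2 + 6.0 * mos - 5.0))

print(sos(3.0, 0.25))  # -> 1.0, maximal diversity at MOS = 3
```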

    Audiovisual Quality of Live Music Streaming over Mobile Networks using MPEG-DASH

    The MPEG-DASH protocol has been rapidly adopted by most major network content providers and enables clients to make informed decisions in the context of HTTP streaming, based on network and device conditions, using the available media representations. A review of the literature on adaptive streaming over mobile networks shows that most emphasis has been on adapting the video quality, whereas this work examines the trade-off between video and audio quality. In particular, subjective tests were undertaken for live music streaming over emulated mobile networks with MPEG-DASH. A group of audio/video sequences was designed to emulate varying bandwidth arising from network congestion, with varying trade-offs between audio and video bit rates. Absolute Category Rating was used to evaluate the relative impact of both audio and video quality on the overall Quality of Experience (QoE). One key finding from the statistical analysis of Mean Opinion Score (MOS) results using Analysis of Variance is that providing reduced audio quality has a much lower impact on QoE than reducing video quality at a similar total bandwidth. This paper also describes an objective model for audiovisual quality estimation that combines the outcomes of audio and video metrics into a joint parametric model. The correlation between predicted and subjective MOS was computed using several measures (Pearson and Spearman correlation coefficients, Root Mean Square Error (RMSE) and epsilon-insensitive RMSE). The obtained results indicate that the proposed approach is a viable solution for objective audiovisual quality assessment in the context of live music streaming over mobile networks.
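    A common parametric form in the audiovisual-integration literature combines audio-only and video-only scores with an interaction term, MOS_av ≈ c0 + c1·MOS_a + c2·MOS_v + c3·MOS_a·MOS_v. The sketch below fits such a generic form by least squares and reports two of the evaluation measures mentioned above; the model structure and the data are illustrative, not the paper's exact model.

```python
import numpy as np

# Illustrative audio-only, video-only and audiovisual MOS values.
mos_a = np.array([2.1, 3.4, 4.2, 1.8, 3.9])
mos_v = np.array([3.0, 2.5, 4.5, 1.9, 4.1])
mos_av = np.array([2.6, 2.7, 4.4, 1.7, 4.0])   # subjective targets

# Least-squares fit of MOS_av = c0 + c1*a + c2*v + c3*a*v.
X = np.column_stack([np.ones_like(mos_a), mos_a, mos_v, mos_a * mos_v])
coef, *_ = np.linalg.lstsq(X, mos_av, rcond=None)
pred = X @ coef

rmse = np.sqrt(np.mean((pred - mos_av) ** 2))
pearson = np.corrcoef(pred, mos_av)[0, 1]
print(f"RMSE = {rmse:.3f}, Pearson = {pearson:.3f}")
```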

    Predicting Single Observer’s Votes from Objective Measures using Neural Networks

    The last decades have witnessed an increasing number of works proposing objective measures for media quality assessment, i.e. estimating the mean opinion score (MOS) of human observers. In this contribution, we investigate the possibility of modeling and predicting single observers’ opinion scores rather than the MOS. More precisely, we attempt to approximate the choices of one single observer by designing a neural network (NN) that is expected to mimic that observer’s behavior in terms of visual quality perception. Once such NNs (one per observer) are trained, they can be regarded as “virtual observers”: they take as input information about a sequence and output the score that the related observer would have given after watching that sequence. This new approach makes it possible to automatically obtain different opinions regarding the perceived visual quality of a sequence under investigation, and thus to estimate not only the MOS but also other statistical indexes such as, for instance, the standard deviation of the opinions. Large numerical experiments are performed to provide further insight into the suitability of the approach.
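    The sketch below illustrates the “virtual observer” idea with one small regressor per observer, mapping objective features of a sequence to that observer’s score; polling the panel of trained models then yields both a MOS estimate and its spread. The features, architecture and synthetic data are illustrative assumptions, not the paper’s setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = rng.uniform(0, 1, size=(200, 4))      # e.g. per-sequence VQM values
# Synthetic scores of 3 observers on the same 200 sequences, clipped to 1-5.
scores = np.clip(features @ rng.uniform(1, 2, (4, 3))
                 + rng.normal(0, 0.3, (200, 3)), 1, 5)

# Train one small NN per observer: each becomes a "virtual observer".
virtual_observers = []
for k in range(scores.shape[1]):
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    nn.fit(features, scores[:, k])
    virtual_observers.append(nn)

# Poll the panel on a sequence: gives both the MOS and the opinion spread.
votes = np.array([nn.predict(features[:1])[0] for nn in virtual_observers])
print(f"MOS = {votes.mean():.2f}, SOS = {votes.std(ddof=1):.2f}")
```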

    State of the art of audio- and video-based solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.
